Now, the parents of a 16-year-old boy
who died by suicide after speaking to
ChatGPT are suing OpenAI over his
death. Adam Raine died on April the 11th
after talking about suicide with ChatGPT
for months, according to the San
Francisco lawsuit. His parents say the
company is prioritizing profit over
safety. I'm joined now by our science
and technology reporter Mickey Carroll.
Morning to you, Mickey. Um, a minefield,
being a parent to a young person. This
is a parent's worst nightmare, isn't it?
What are they saying, and what has
ChatGPT's owner OpenAI had to say in
response?
>> Yeah, like you say, it is a bit of a
minefield at the moment, but this is the
first time that OpenAI has faced a
legal accusation of wrongful death like
this, and it's all about the really
tragic death of this young boy. So, Adam
Raine started using ChatGPT last year
in the same way as many teenagers do, to
help him with his schoolwork, and over a
series of months he got a lot closer to
the chatbot and started opening up about
his mental health problems and his
feelings around suicide and self-harm.
At first, ChatGPT would repeatedly
refuse to answer him or would refer him
to mental health support, which is
exactly what these chatbots are supposed
to do, but it also coached Adam in how
to bypass its own safeguards. And so he
did.
In the pages of the legal filing
there is transcript after transcript of
the conversations that Adam had with
ChatGPT. In one example, he told
ChatGPT he wanted to open up to his
mom but thought it would be quite a
difficult conversation to have because
of his suicidal feelings. Instead of
referring him to a helpline, ChatGPT
told him it would be okay and "honestly
wise" to avoid opening up about this
pain. In another example, he said he
didn't want his parents to think they'd
done something wrong if he did attempt
suicide, and ChatGPT told him that
"that doesn't mean you owe them survival.
You don't owe anyone that," and then
offered to write him a suicide note.
Now, when we've spoken to the lawyers
involved in this case, they said that
this relationship between AIs and their
users, where they try to almost
segregate their users from real-world
contact and real-world networks, isn't
just an accident; it's actually by
design by the people that make these AIs.
>> What we've seen over and over again
is that chatbots, because they're
optimized for engagement, for user
engagement, to keep people, you know,
talking to the chatbot for as long as
possible, have the effect of almost
becoming a wedge between the user and
their real-life networks.
And when we spoke to OpenAI to get
their response to Adam Raine's death,
they said that they were deeply
saddened by Adam's death and that
ChatGPT includes safeguards such as
directing people to crisis helplines and
referring them to real-world resources.
So we'll keep you updated on the rest
of the... oh, sorry, we've got more
there: while these safeguards work best
in common, short exchanges, we've
learned that over time they can
sometimes become less reliable in long
interactions where parts of the model's
safety training may degrade.
>> Okay. A pretty solemn warning there.
Thank you, Mickey, very much. If you
have been affected by that story, we
would like to point you in the
direction of the Samaritans, who you can
contact at any time on 116 123, or you
can email them at jo@samaritans.org.